
    An Empirical Analysis of Search in GSAT

    We describe an extensive study of search in GSAT, an approximation procedure for propositional satisfiability. GSAT performs greedy hill-climbing on the number of satisfied clauses in a truth assignment. Our experiments provide a more complete picture of GSAT's search than previous accounts. We describe in detail the two phases of search: rapid hill-climbing followed by a long plateau search. We demonstrate that when applied to randomly generated 3SAT problems, there is a very simple scaling with problem size for both the mean number of satisfied clauses and the mean branching rate. Our results allow us to make detailed numerical conjectures about the length of the hill-climbing phase and the average gradient of this phase, and to conjecture that both the average score and the average branching rate decay exponentially during plateau search. We end by showing how these results can be used to direct future theoretical analysis. This work provides a case study of how computer experiments can be used to improve understanding of the theoretical properties of algorithms. Comment: See http://www.jair.org/ for any accompanying file
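
    To make the search procedure under study concrete, here is a minimal Python sketch of GSAT-style greedy hill-climbing on the number of satisfied clauses. The restart scheme, tie-breaking and the tiny instance are illustrative assumptions, not the paper's implementation.

        import random

        def num_satisfied(assign, clauses):
            # clauses are lists of signed integers; a positive literal l is true
            # when variable |l| is assigned True
            return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

        def gsat(n_vars, clauses, max_flips=1000, max_tries=10):
            for _ in range(max_tries):
                assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
                for _ in range(max_flips):
                    if num_satisfied(assign, clauses) == len(clauses):
                        return assign
                    # greedy step: flip the variable whose flip satisfies the most
                    # clauses (sideways "plateau" moves are allowed)
                    best = max(range(1, n_vars + 1),
                               key=lambda v: num_satisfied({**assign, v: not assign[v]},
                                                           clauses))
                    assign[best] = not assign[best]
            return None

        # tiny illustrative 3-SAT instance
        print(gsat(3, [[1, -2, 3], [-1, 2, 3], [1, 2, -3]]))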

    Allocation in Practice

    How do we allocate scarce resources? How do we fairly allocate costs? These are two pressing challenges facing society today. I discuss two recent projects at NICTA concerning resource and cost allocation. In the first, we have been working with FoodBank Local, a social startup working in collaboration with food bank charities around the world to optimise the logistics of collecting and distributing donated food. Before we can distribute this food, we must decide how to allocate it to different charities and food kitchens. This gives rise to a fair division problem with several new dimensions, rarely considered in the literature. In the second, we have been looking at cost allocation within the distribution network of a large multinational company. This also has several new dimensions rarely considered in the literature. Comment: To appear in Proc. of the 37th edition of the German Conference on Artificial Intelligence (KI 2014), Springer LNC

    Trying again to fail-first

    For constraint satisfaction problems (CSPs), Haralick and Elliott [1] introduced the Fail-First Principle and defined it in terms of minimizing branch depth. By devising a range of variable ordering heuristics, each in turn trying harder to fail first, Smith and Grant [2] showed that adherence to this strategy does not guarantee reduction in search effort. The present work builds on Smith and Grant. It benefits from the development of a new framework for characterizing heuristic performance that defines two policies, one concerned with enhancing the likelihood of correctly extending a partial solution, the other with minimizing the effort to prove insolubility. The Fail-First Principle can be restated as calling for adherence to the second, fail-first policy, while discounting the other, promise policy. Our work corrects some deficiencies in the work of Smith and Grant, and goes on to confirm their finding that the Fail-First Principle, as originally defined, is insufficient. We then show that adherence to the fail-first policy must be measured in terms of the size of insoluble subtrees, not branch depth. We also show that for soluble problems, both policies must be considered in evaluating heuristic performance. Hence, even in its proper form the Fail-First Principle is insufficient. We also show that the "FF" series of heuristics devised by Smith and Grant is a powerful tool for evaluating heuristic performance, including the subtle relations between heuristic features and adherence to a policy.
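
    As a point of reference for the policies discussed above, the following Python sketch shows the simplest heuristic usually associated with the Fail-First Principle, branching on the variable with the smallest remaining domain, inside a plain backtracking solver. The solver, toy problem and names are illustrative assumptions, not the FF heuristics studied by Smith and Grant.

        def backtrack(domains, constraints, assignment=None):
            """domains: dict var -> set of values; constraints: list of (u, v, pred) tuples."""
            assignment = assignment or {}
            if len(assignment) == len(domains):
                return assignment
            unassigned = [v for v in domains if v not in assignment]
            # fail-first flavoured choice: branch on the variable with the fewest values left
            var = min(unassigned, key=lambda v: len(domains[v]))
            for val in domains[var]:
                assignment[var] = val
                if all(pred(assignment[u], assignment[w])
                       for u, w, pred in constraints
                       if u in assignment and w in assignment):
                    result = backtrack(domains, constraints, assignment)
                    if result is not None:
                        return result
                del assignment[var]
            return None

        # toy problem: three variables that must be pairwise different
        neq = lambda a, b: a != b
        domains = {"x": {1, 2}, "y": {1, 2, 3}, "z": {2, 3}}
        constraints = [("x", "y", neq), ("y", "z", neq), ("x", "z", neq)]
        print(backtrack(domains, constraints))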

    Mean Curvature Flow of Spacelike Graphs

    We prove that the mean curvature flow of a spacelike graph in $(\Sigma_1\times \Sigma_2, g_1-g_2)$ of a map $f:\Sigma_1\to \Sigma_2$ from a closed Riemannian manifold $(\Sigma_1,g_1)$ with $Ricci_1>0$ to a complete Riemannian manifold $(\Sigma_2,g_2)$ with bounded curvature tensor and derivatives, and with sectional curvatures satisfying $K_2\leq K_1$, remains a spacelike graph, exists for all time, and converges to a slice at infinity. We also show, with no need of the assumption $K_2\leq K_1$, that if $K_1>0$, or if $Ricci_1>0$ and $K_2\leq -c$, $c>0$ constant, any map $f:\Sigma_1\to \Sigma_2$ is trivially homotopic provided $f^*g_2<\rho g_1$, where $\rho=\min_{\Sigma_1}K_1/\sup_{\Sigma_2}K_2^+\geq 0$ in case $K_1>0$, and $\rho=+\infty$ in case $K_2\leq 0$. This largely extends some known results for $K_i$ constant and $\Sigma_2$ compact, obtained using the Riemannian structure of $\Sigma_1\times \Sigma_2$, and also shows how regularity theory for the mean curvature flow is simpler and more natural in the pseudo-Riemannian setting than in the Riemannian one. Comment: version 5: Math. Z. (online first 30 July 2010). version 4: 30 pages: we replace the condition $K_1\geq 0$ by the weaker one $Ricci_1\geq 0$. The proofs are essentially the same. We change the title to a shorter one. We add an application
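
    For orientation, the evolution equation in question can be written as follows; this is the standard formulation in terms of an immersion $F$ and the mean curvature vector $\vec H$, not the paper's exact notation.

        % mean curvature flow of the graph of f_t : \Sigma_1 \to \Sigma_2 inside the
        % pseudo-Riemannian product (\Sigma_1 \times \Sigma_2, g_1 - g_2)
        \[
          \frac{\partial F}{\partial t}(p,t) = \vec{H}(p,t),
          \qquad F(p,0) = \bigl(p, f(p)\bigr).
        \]
        % The theorem asserts that each F(\cdot,t) remains a spacelike graph, the flow
        % exists for all t \ge 0, and F(\cdot,t) converges to a slice
        % \Sigma_1 \times \{q\} as t \to \infty.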

    Pathway choice in DNA double strand break repair: Observations of a balancing act

    Proper repair of DNA double strand breaks (DSBs) is vital for the preservation of genomic integrity. There are two main pathways that repair DSBs: homologous recombination (HR) and non-homologous end-joining (NHEJ). HR is restricted to the S and G2 phases of the cell cycle due to the requirement for the sister chromatid as a template, while NHEJ is active throughout the cell cycle and does not rely on a template. The balance between both pathways is essential for genome stability, and numerous assays have been developed to measure the efficiency of the two pathways. Several proteins are known to affect the balance between HR and NHEJ, and the complexity of the break also plays a role. In this review, we describe several repair assays to determine the efficiencies of both pathways. We discuss how disturbance of the balance between HR and NHEJ can lead to disease, but also how it can be exploited for cancer treatment.

    Scalable Parallel Numerical Constraint Solver Using Global Load Balancing

    We present a scalable parallel solver for numerical constraint satisfaction problems (NCSPs). Our parallelization scheme consists of homogeneous worker solvers, each of which runs on an available core and communicates with others via the global load balancing (GLB) method. The parallel solver is implemented in X10, which provides an implementation of GLB as a library. In experiments, several NCSPs from the literature were solved, attaining up to a 516-fold speedup using 600 cores of the TSUBAME2.5 supercomputer. Comment: To be presented at X10'15 Workshop
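
    The overall architecture can be pictured with a small Python sketch: an interval branch-and-prune worker applied to sub-boxes of the search space, here farmed out with a static split over a process pool rather than the dynamic, lifeline-based global load balancing the paper uses in X10. The constraint, tolerance and splitting rule are all illustrative assumptions.

        from multiprocessing import Pool

        def satisfiable(box):
            """Crude interval test for the toy constraint x^2 + y^2 = 1 on a box."""
            (xl, xu), (yl, yu) = box
            lo = (0.0 if xl <= 0.0 <= xu else min(xl * xl, xu * xu)) + \
                 (0.0 if yl <= 0.0 <= yu else min(yl * yl, yu * yu))
            hi = max(xl * xl, xu * xu) + max(yl * yl, yu * yu)
            return lo <= 1.0 <= hi

        def solve(box, eps=1e-2):
            """Branch and prune: split a box, keep sub-boxes that may contain a solution."""
            if not satisfiable(box):
                return []
            (xl, xu), (yl, yu) = box
            if max(xu - xl, yu - yl) < eps:
                return [box]
            if xu - xl >= yu - yl:            # split the widest dimension
                m = (xl + xu) / 2
                return solve(((xl, m), (yl, yu)), eps) + solve(((m, xu), (yl, yu)), eps)
            m = (yl + yu) / 2
            return solve(((xl, xu), (yl, m)), eps) + solve(((xl, xu), (m, yu)), eps)

        if __name__ == "__main__":
            # static split into four initial boxes; GLB instead rebalances work dynamically
            seeds = [((-1.0, 0.0), (-1.0, 0.0)), ((-1.0, 0.0), (0.0, 1.0)),
                     ((0.0, 1.0), (-1.0, 0.0)), ((0.0, 1.0), (0.0, 1.0))]
            with Pool(4) as pool:
                results = pool.map(solve, seeds)
            print(sum(len(r) for r in results), "candidate boxes")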

    Backbone Fragility and the Local Search Cost Peak

    The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably effective at solving hard random 3-SAT instances near the so-called 'satisfiability threshold', but still shows a peak in search cost near the threshold and large variations in cost over different instances. We make a number of significant contributions to the analysis of WSat on high-cost random instances, using the recently introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are entailed by an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We show a very strong correlation between search cost and the Hamming distance to the nearest solution early in WSat's search. This pattern leads us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for local search are those with very large backbones which are also backbone-fragile. We suggest that the decay in cost beyond the satisfiability threshold is due to increasing backbone robustness (the opposite of backbone fragility). Our hypothesis makes three correct predictions. First, that the backbone robustness of an instance is negatively correlated with the local search cost when other factors are controlled for. Second, that backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSat. Third, that the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone. In understanding the pathologies of local search methods, we hope to contribute to the development of new and better techniques.
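
    Since the backbone is central to the argument, here is a minimal Python sketch that computes it by brute-force enumeration for a tiny CNF formula. Real instances are far too large for this, which is why the paper works with sampled and statistical measures instead; the example formula is an illustrative assumption.

        from itertools import product

        def satisfies(assign, clauses):
            # a clause (list of signed integers) is satisfied if some literal is true
            return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

        def backbone(n_vars, clauses):
            """Return the set of literals that take the same value in every model."""
            models = []
            for bits in product([False, True], repeat=n_vars):
                assign = {v: bits[v - 1] for v in range(1, n_vars + 1)}
                if satisfies(assign, clauses):
                    models.append(assign)
            if not models:
                return None                     # unsatisfiable: no backbone to speak of
            fixed = set()
            for v in range(1, n_vars + 1):
                values = {m[v] for m in models}
                if len(values) == 1:
                    fixed.add(v if values.pop() else -v)
            return fixed

        # tiny CNF whose models all force x1 = x3 = True, so the backbone is {1, 3}
        clauses = [[1, 2], [1, -2], [-1, 3]]
        print(backbone(3, clauses))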

    On The Complexity and Completeness of Static Constraints for Breaking Row and Column Symmetry

    We consider a common type of symmetry where we have a matrix of decision variables with interchangeable rows and columns. A simple and efficient method to deal with such row and column symmetry is to post symmetry breaking constraints like DOUBLELEX and SNAKELEX. We provide a number of positive and negative results on posting such symmetry breaking constraints. On the positive side, we prove that we can compute in polynomial time a unique representative of an equivalence class in a matrix model with row and column symmetry if the number of rows (or of columns) is bounded and in a number of other special cases. On the negative side, we show that whilst DOUBLELEX and SNAKELEX are often effective in practice, they can leave a large number of symmetric solutions in the worst case. In addition, we prove that propagating DOUBLELEX completely is NP-hard. Finally we consider how to break row, column and value symmetry, correcting a result in the literature about the safeness of combining different symmetry breaking constraints. We end with the first experimental study on how much symmetry is left by DOUBLELEX and SNAKELEX on some benchmark problems. Comment: To appear in the Proceedings of the 16th International Conference on Principles and Practice of Constraint Programming (CP 2010)
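
    For readers unfamiliar with DOUBLELEX, the condition itself is simple, as the following Python sketch illustrates on a pair of row-swapped 0/1 matrices. The check below is only the declarative condition, not the propagation algorithm whose complexity the paper analyses.

        def double_lex(matrix):
            """True iff both the rows and the columns are lexicographically non-decreasing."""
            rows = [tuple(r) for r in matrix]
            cols = [tuple(c) for c in zip(*matrix)]
            return all(a <= b for a, b in zip(rows, rows[1:])) and \
                   all(a <= b for a, b in zip(cols, cols[1:]))

        # two row-swapped (hence symmetric) solutions: DOUBLELEX keeps only the first
        print(double_lex([[0, 1], [1, 0]]))   # True
        print(double_lex([[1, 0], [0, 1]]))   # False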

    Random Costs in Combinatorial Optimization

    The random cost problem is the problem of finding the minimum in an exponentially long list of random numbers. By definition, this problem cannot be solved faster than by exhaustive search. It is shown that a classical NP-hard optimization problem, number partitioning, is essentially equivalent to the random cost problem. This explains the poor performance of heuristic approaches to the number partitioning problem and allows us to calculate the probability distributions of the optimum and sub-optimum costs. Comment: 4 pages, Revtex, 2 figures (eps), submitted to PR
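
    A quick illustration of why heuristics struggle here: the Python sketch below compares the classic greedy heuristic for number partitioning against the exact optimum found by exhaustive search on a small random instance. The instance size and seed are arbitrary choices for illustration.

        from itertools import product
        import random

        def greedy_discrepancy(nums):
            """Greedy heuristic: put each number, largest first, into the lighter subset."""
            a = b = 0
            for x in sorted(nums, reverse=True):
                if a <= b:
                    a += x
                else:
                    b += x
            return abs(a - b)

        def optimal_discrepancy(nums):
            """Exact optimum by exhaustive search over all 2^n sign assignments."""
            return min(abs(sum(s * x for s, x in zip(signs, nums)))
                       for signs in product([1, -1], repeat=len(nums)))

        random.seed(0)
        nums = [random.randrange(1, 10 ** 6) for _ in range(16)]
        print("greedy:", greedy_discrepancy(nums), " optimal:", optimal_discrepancy(nums))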

    Phase Transition in the Number Partitioning Problem

    Number partitioning is an NP-complete problem of combinatorial optimization. A statistical mechanics analysis reveals the existence of a phase transition that separates the easy-to-solve from the hard-to-solve instances and that reflects the pseudo-polynomiality of number partitioning. The phase diagram and the value of the typical ground state energy are calculated. Comment: minor changes (references, typos and discussion of results)
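
    The easy/hard signature behind such a phase transition can be seen numerically. The Python sketch below estimates, by brute force on small instances, the fraction of random instances admitting a perfect partition as the number of bits per item grows; all sizes and sample counts are chosen only so that it runs quickly, and it is not the paper's statistical-mechanics calculation.

        from itertools import product
        import random

        def has_perfect_partition(nums):
            # brute force over all sign assignments; feasible only for tiny n
            if sum(nums) % 2:
                return False
            return any(sum(s * x for s, x in zip(signs, nums)) == 0
                       for signs in product([1, -1], repeat=len(nums)))

        random.seed(1)
        n, trials = 12, 40
        for m in (3, 6, 9, 12, 15):          # m = number of bits per item
            hits = sum(has_perfect_partition([random.randrange(1, 2 ** m) for _ in range(n)])
                       for _ in range(trials))
            print(f"m/n = {m / n:.2f}: fraction with a perfect partition ~ {hits / trials:.2f}")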